A course with R, Stan, and brms
Course n°01: Introduction to Bayesian inference, Beta-Binomial model
Course n°02: Introduction to brms, linear regression
Course n°03: Markov Chain Monte Carlo, generalised linear model
Course n°04: Multilevel models, cognitive models
\[\newcommand\given[1][]{\:#1\vert\:}\]
The aim is to build a model that can learn at several levels and produce estimates informed by the different groups present in the data. We will follow the example below throughout this course.
Let’s imagine that we’ve built a robot that visits cafés, and that has fun measuring the waiting time after ordering. This robot visits 20 different cafés, 5 times in the morning and 5 times in the afternoon, and measures the time (in minutes) it takes to get a coffee.
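A data set with this structure could be simulated as follows (a hypothetical sketch: the seed, the café-level means, and the afternoon effect are all arbitrary choices, not the values used to generate the data shown below):

```r
set.seed(666) # arbitrary seed, for reproducibility
n_cafes <- 20
n_visits <- 10 # 5 mornings and 5 afternoons per café
cafe_mean <- rnorm(n_cafes, mean = 3, sd = 1) # average morning wait per café
afternoon_effect <- -1 # assumed drop in waiting time in the afternoon

d <- data.frame(
    cafe = rep(1:n_cafes, each = n_visits),
    afternoon = rep(0:1, times = n_cafes * n_visits / 2) # alternating 0/1
    )

d$wait <- rnorm(
    n = nrow(d),
    mean = cafe_mean[d$cafe] + afternoon_effect * d$afternoon,
    sd = 0.5 # residual variability within café and time of day
    )

head(d)
```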
cafe afternoon wait
1 1 0 4.9989926
2 1 1 2.2133944
3 1 0 4.1866730
4 1 1 3.5624399
5 1 0 3.9956779
6 1 1 2.8957176
7 1 0 3.7804582
8 1 1 2.3844837
9 1 0 3.8617982
10 1 1 2.5800004
11 2 0 2.7421223
12 2 1 1.3525907
13 2 0 2.5215095
14 2 1 0.9628102
15 2 0 1.9543977
An initial model can be built, estimating the average time (across all cafés combined) to be served.
\[ \begin{align} \color{orangered}{w_{i}} \ &\color{orangered}{\sim \mathrm{Normal}(\mu_{i}, \sigma)} \\ \color{black}{\mu_{i}} \ &\color{black}{= \alpha} \\ \color{steelblue}{\alpha} \ &\color{steelblue}{\sim \mathrm{Normal}(5, 10)} \\ \color{steelblue}{\sigma} \ &\color{steelblue}{\sim \mathrm{HalfCauchy}(0, 2)} \\ \end{align} \]
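This model could be fitted with brms along the following lines (a sketch, assuming the data sit in a data frame `d` with a `wait` column; the priors mirror the equations above, and `cauchy(0, 2)` becomes a half-Cauchy because `sigma` is constrained to be positive):

```r
library(brms)

prior1 <- c(
    prior(normal(5, 10), class = Intercept), # prior on alpha
    prior(cauchy(0, 2), class = sigma) # half-Cauchy prior on sigma
    )

mod1 <- brm(
    wait ~ 1, # intercept-only model
    prior = prior1,
    data = d
    )
```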
\[ p(x \given x_{0}, \gamma) = \left(\pi \gamma \left[1 + \left(\frac{x-x_{0}}{\gamma}\right)^{2}\right] \right)^{-1} \]
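This is the density of the Cauchy distribution with location \(x_{0}\) and scale \(\gamma\); we can check the formula against R's built-in `dcauchy()` (with arbitrary values):

```r
x <- 1.5; x0 <- 0; gamma <- 2 # arbitrary values

# the density formula written out by hand
manual <- 1 / (pi * gamma * (1 + ((x - x0) / gamma)^2) )

# comparing it to R's built-in Cauchy density
all.equal(manual, dcauchy(x, location = x0, scale = gamma) ) # TRUE
```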
A second model estimates one intercept per café. This is equivalent to constructing 20 dummy (indicator) variables.
\[ \begin{align} \color{orangered}{w_{i}} \ &\color{orangered}{\sim \mathrm{Normal}(\mu_{i}, \sigma)} \\ \color{black}{\mu_{i}} \ &\color{black}{= \alpha_{\text{café}[i]}} \\ \color{steelblue}{\alpha_{\text{café}[i]}} \ &\color{steelblue}{\sim \mathrm{Normal}(5, 10)} \\ \color{steelblue}{\sigma} \ &\color{steelblue}{\sim \mathrm{HalfCauchy}(0, 2)} \\ \end{align} \]
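The parameter names in the output below (`b_factorcafe1`, …) suggest a formula that treats `cafe` as a factor and removes the general intercept, so that each café gets its own intercept. A sketch of such a call, assuming the data frame `d` from before:

```r
library(brms)

mod2 <- brm(
    wait ~ 0 + factor(cafe), # one intercept per café, no general intercept
    prior = prior(normal(5, 10), class = b), # same prior on every intercept
    data = d
    )
```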
Estimate Est.Error Q2.5 Q97.5
b_factorcafe1 3.446599 0.2674826 2.9112647 3.967342
b_factorcafe2 1.730125 0.2615548 1.2190581 2.238199
b_factorcafe3 3.321925 0.2629713 2.8148474 3.833974
b_factorcafe4 2.803049 0.2584659 2.2855814 3.315254
b_factorcafe5 1.464544 0.2627843 0.9555137 1.969844
b_factorcafe6 3.643927 0.2711427 3.1090742 4.194942
b_factorcafe7 2.944457 0.2614577 2.4278226 3.472341
b_factorcafe8 3.172552 0.2660662 2.6328560 3.694486
b_factorcafe9 3.337923 0.2689664 2.8118534 3.865261
b_factorcafe10 3.099324 0.2625124 2.5673710 3.619401
b_factorcafe11 1.916332 0.2571375 1.4105794 2.419089
b_factorcafe12 3.494184 0.2541949 3.0054751 3.998191
b_factorcafe13 3.220359 0.2545903 2.7332581 3.714874
b_factorcafe14 2.626250 0.2633425 2.1031520 3.147996
b_factorcafe15 3.476897 0.2641088 2.9619580 3.991096
b_factorcafe16 2.997862 0.2599787 2.4853711 3.510518
b_factorcafe17 3.884514 0.2633627 3.3741596 4.398071
b_factorcafe18 5.529265 0.2612210 5.0195458 6.061847
b_factorcafe19 2.977062 0.2605880 2.4784251 3.489196
b_factorcafe20 3.364699 0.2530715 2.8744273 3.870453
Couldn’t we ensure that the time measured at café 1 informs the measurement taken at café 2 and café 3? As well as the average time taken to be served? We’re going to learn the priors from the data…
\[ \begin{align} \text{Level 1}: \color{orangered}{w_{i}} \ &\color{orangered}{\sim \mathrm{Normal}(\mu_{i}, \sigma)} \\ \color{black}{\mu_{i}} \ &\color{black}{= \alpha_{\text{café}[i]}} \\ \text{Level 2}: \color{steelblue}{\alpha_{\text{café}}} \ &\color{steelblue}{\sim \mathrm{Normal}(\alpha,\sigma_{\text{café}})} \\ \color{steelblue}{\alpha} \ &\color{steelblue}{\sim \mathrm{Normal}(5, 10)} \\ \color{steelblue}{\sigma_{\text{café}}} \ &\color{steelblue}{\sim \mathrm{HalfCauchy}(0, 2)} \\ \color{steelblue}{\sigma} \ &\color{steelblue}{\sim \mathrm{HalfCauchy}(0, 2)} \\ \end{align} \]
The prior for the intercept of each coffee (\(\alpha_{\text{café}}\)) is now a function of two parameters (\(\alpha\) and \(\sigma_{\text{café}}\)). \(\alpha\) and \(\sigma_{\text{café}}\) are called hyper-parameters, they are parameters for parameters, and their priors are called hyperpriors. There are two levels in the model…
\[ \begin{align} \color{orangered}{w_{i}} \ &\color{orangered}{\sim \mathrm{Normal}(\mu_{i}, \sigma)} \\ \color{black}{\mu_{i}} \ &\color{black}{= \alpha_{\text{café}[i]}} \\ \color{steelblue}{\alpha_{\text{café}}} \ &\color{steelblue}{\sim \mathrm{Normal}(\alpha,\sigma_{\text{café}})} \\ \end{align} \]
NB: \(\alpha\) is defined here in the prior for \(\alpha_{\text{café}}\) but it could, in the same way, be defined in the linear model:
\[ \begin{align} \color{orangered}{w_{i}} \ &\color{orangered}{\sim \mathrm{Normal}(\mu_{i}, \sigma)} \\ \color{black}{\mu_{i}} \ &\color{black}{= \alpha + \alpha_{\text{café}[i]}} \\ \color{steelblue}{\alpha_{\text{café}}} \ &\color{steelblue}{\sim \mathrm{Normal}(0,\sigma_{\text{café}})} \\ \end{align} \]
We can always “remove” the mean from a Gaussian distribution and consider it as a constant plus a Gaussian centred on zero.
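A quick numerical illustration of this equivalence: with the same seed, drawing from \(\mathrm{Normal}(\alpha, \sigma)\) gives exactly the same values as \(\alpha\) plus a draw from \(\mathrm{Normal}(0, \sigma)\).

```r
set.seed(666)
x1 <- rnorm(1e5, mean = 5, sd = 2) # Normal(5, 2)

set.seed(666)
x2 <- 5 + rnorm(1e5, mean = 0, sd = 2) # 5 + Normal(0, 2)

all.equal(x1, x2) # TRUE: same draws, shifted formulation
```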
NB: when \(\alpha\) is defined in the linear model, the \(\alpha_{\text{café}}\) represent deviations from the mean intercept. It is therefore necessary to add \(\alpha\) and \(\alpha_{\text{café}}\) to obtain the average waiting time per café…
This model has 23 parameters: the general intercept \(\alpha\), the residual variability \(\sigma\), the variability between cafés \(\sigma_{\text{café}}\), and one intercept per café (20 intercepts).
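In brms, this adaptive prior is requested with the `(1 | cafe)` syntax. A sketch of the call, assuming the same data frame `d`:

```r
library(brms)

mod3 <- brm(
    wait ~ 1 + (1 | cafe), # varying intercept per café
    prior = c(
        prior(normal(5, 10), class = Intercept), # prior on alpha
        prior(cauchy(0, 2), class = sd), # prior on sigma_café
        prior(cauchy(0, 2), class = sigma) # prior on the residual sigma
        ),
    data = d
    )
```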
The James-Stein estimator is defined as \(z = \bar{y} + c(y - \bar{y})\), where \(\bar{y}\) is the sample mean, \(y\) is an individual observation, and \(c\) is a constant, the shrinking factor (Efron & Morris, 1977).
The shrinking factor is determined both by the variability (imprecision) of the measurement (e.g., its standard deviation) and by the distance to the mean estimate (i.e., \(y - \bar{y}\)). In other words, this estimator is less “confident” about (i.e., gives less weight to) imprecise and/or extreme observations. In practice, shrinkage acts as a safeguard against overlearning (overfitting).
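A toy illustration of this shrinkage (the observations and the shrinking factor are hypothetical numbers, chosen only for the example):

```r
y <- c(1.2, 2.8, 3.1, 6.5) # hypothetical observations
c_shrink <- 0.7 # arbitrary shrinking factor, between 0 and 1

# James-Stein-type estimates: each observation is pulled towards the mean
z <- mean(y) + c_shrink * (y - mean(y) )

# extreme observations are pulled more strongly towards the overall mean (3.4)
round(z, 2) # 1.86 2.98 3.19 5.57
```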
The shrinkage observed on the previous slide is due to information pooling between cafés. The estimate of the intercept for each café informs the intercept estimates of the other cafés, as well as the estimate of the general intercept (i.e., the overall average waiting time).
There are generally three perspectives (or strategies):
Complete pooling: the waiting time is assumed to be invariant, a common intercept (mod1) is estimated.
No pooling: it is assumed that each café’s waiting time is unique and independent: an intercept is estimated for each café, but without informing the higher level (mod2).
Partial pooling: an adaptive prior is used, as in the previous example (mod3).
The complete pooling strategy generally underfits the data (low predictive capacity) whereas the no pooling strategy amounts to overfitting the data (low predictive capacity here too). The partial pooling strategy (i.e., that of multilevel models) balances underfitting and overfitting.
We can compare these models using indices derived from information theory (extensions of AIC), such as the WAIC (the lower the better).
# computing the WAIC for each model and storing it
mod1 <- add_criterion(mod1, "waic")
mod2 <- add_criterion(mod2, "waic")
mod3 <- add_criterion(mod3, "waic")
# comparing these WAICs
w <- loo_compare(mod1, mod2, mod3, criterion = "waic")
print(w, simplify = FALSE)
     elpd_diff se_diff elpd_waic se_elpd_waic p_waic se_p_waic waic se_waic
mod3 0.0 0.0 -253.9 8.3 18.3 1.5 507.8 16.6
mod2 -0.9 1.3 -254.8 8.4 19.8 1.6 509.6 16.8
mod1 -57.3 10.6 -311.2 10.5 2.1 0.3 622.3 21.1
We note that model 3 has only about 18 effective parameters (p_waic), fewer than model 2, whereas it actually has 2 more… posterior_summary(mod3)[3, 1] gives us the sigma of the adaptive prior on \(\alpha_{\text{café}}\) (\(\sigma_{\text{café}} = 0.82\)). Note that this sigma is very low, which amounts to assigning a very restrictive or regularising prior.
The estimates of the first model (complete pooling model) and the third model (partial pooling model) are compared.
Estimate Est.Error Q2.5 Q97.5
b_Intercept 3.121625 0.07923713 2.966771 3.280714
sigma 1.143018 0.05640463 1.036736 1.256459
Estimate Est.Error Q2.5 Q97.5
b_Intercept 3.1247949 0.20583672 2.713431 3.5217680
sigma 0.8215185 0.04361643 0.741872 0.9129304
Both models make the same prediction (on average) for \(\alpha\), but model 3 is more uncertain of its prediction than model 1 (see the standard error for \(\alpha\))…
The \(\sigma\) estimate of model 3 is smaller than that of model 1 because model 3 decomposes the unexplained variability into two sources: variability in waiting time between cafés and the residual variability \(\sigma\).
Let’s assume that our robot doesn’t visit all the cafés the same number of times (as in the previous case) but that it visits more often the cafés close to home…
We can see that cafés that are visited frequently (right) are less affected by the effect of shrinkage. Their estimates are less “pulled” towards the average than the estimates of the least frequently visited cafés (left).
Five (contradictory) definitions identified by Gelman (2005).
Gelman & Hill (2006) suggest instead the use of the terms constant effects and varying effects, and to always use multilevel modelling, considering that a so-called fixed effect can simply be seen as a random effect whose variance is equal to \(0\).
Varying the intercepts for each café is simply another way of (adaptively) regularising, that is, reducing the weight given to the data in the estimation. The model becomes able to estimate the extent to which the groups (in this case the cafés) are different, while estimating the characteristics of each café…
There is a difference between cross-classified (or “crossed”) multilevel models and nested (or hierarchical) multilevel models. Cross-classified models refer to data structured according to two (or more) non-nested random factors, whereas hierarchical models usually refer to hierarchically structured data (e.g., a student in a class in a school in a city…). See this discussion for more details.
However, the two types of models are written in a similar way, on several “levels”. The term “multilevel” (in our terminology) therefore refers to the structure of the model, to its specification. It is distinct from the structure of the data.
We are now interested in the effect of the time of day on the waiting time. Do we wait more in the morning, or in the afternoon?
\[ \begin{align} \color{orangered}{w_{i}} \ &\color{orangered}{\sim \mathrm{Normal}(\mu_{i}, \sigma)} \\ \color{black}{\mu_{i}} \ &\color{black}{= \alpha_{\text{café}[i]} + \beta_{\text{café}[i]} \times A_{i}} \\ \end{align} \]
Where \(A_{i}\) is a dummy variable coded 0/1 for morning/afternoon and where \(\beta_{\text{café}}\) is therefore a difference parameter (i.e., a slope) between morning and afternoon.
Note: we know that cafés have intercepts and slopes which co-vary… Popular cafés will be overcrowded in the morning and much less so in the afternoon, resulting in a large slope. These cafés will also have a longer average waiting time (i.e., a larger intercept). In these cafés, \(\alpha\) is large and \(\beta\) is far from zero. Conversely, in an unpopular café, the waiting time will be short, as will the difference between morning and afternoon waiting times.
We could therefore use the co-variation between the intercept and slope to make better inferences. In other words, ensure that the estimate of the intercept informs the estimate of the slope, and reciprocally.
\[ \begin{align} \color{orangered}{w_{i}} \ &\color{orangered}{\sim \mathrm{Normal}(\mu_{i}, \sigma)} \\ \color{black}{\mu_{i}} \ &\color{black}{= \alpha_{\text{café}[i]} + \beta_{\text{café}[i]} \times A_{i}} \\ \color{steelblue}{\begin{bmatrix} \alpha_{\text{café}} \\ \beta_{\text{café}} \\ \end{bmatrix}} \ &\color{steelblue}{\sim \mathrm{MVNormal}\bigg(\begin{bmatrix} \alpha \\ \beta \end{bmatrix}, \textbf{S}\bigg)} \\ \end{align} \]
The third line posits that every café has an intercept \(\alpha_{\text{café}}\) and a slope \(\beta_{\text{café}}\), defined by a bivariate (i.e., two-dimensional) Gaussian prior having as means \(\alpha\) and \(\beta\) and as covariance matrix \(\textbf{S}\).
\[\mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})\]
Where \(\boldsymbol{\mu}\) is a (\(k\)-dimensional) vector of means, for instance: mu <- c(a, b).
\(\boldsymbol{\Sigma}\) is a covariance matrix of \(k \times k\) dimensions, and which corresponds to the matrix given by the function vcov().
\[ \begin{align} \boldsymbol{\Sigma} &= \begin{pmatrix} \sigma_{\alpha}^{2} & \sigma_{\alpha} \sigma_{\beta} \rho \\ \sigma_{\alpha} \sigma_{\beta} \rho & \sigma_{\beta}^{2} \\ \end{pmatrix} \\ \end{align} \]
This matrix can be constructed in two different ways, strictly equivalent.
\[ \begin{align} \boldsymbol{\Sigma} &= \begin{pmatrix} \sigma_{\alpha} & 0 \\ 0 & \sigma_{\beta} \\ \end{pmatrix} \begin{pmatrix} 1 & \rho \\ \rho & 1 \\ \end{pmatrix} \begin{pmatrix} \sigma_{\alpha} & 0 \\ 0 & \sigma_{\beta} \\ \end{pmatrix} \\ \end{align} \]
The second method is convenient because it considers separately the standard deviations and correlations.
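The equivalence of the two constructions can be checked numerically in R (with arbitrary values for the standard deviations and the correlation):

```r
sigma_a <- 1; sigma_b <- 0.5; rho <- -0.7 # arbitrary values

# first method: filling in the covariance matrix directly
Sigma1 <- matrix(
    c(sigma_a^2, sigma_a * sigma_b * rho,
      sigma_a * sigma_b * rho, sigma_b^2),
    ncol = 2
    )

# second method: diag(sd) %*% R %*% diag(sd), separating sds and correlations
R <- matrix(c(1, rho, rho, 1), ncol = 2) # correlation matrix
Sigma2 <- diag(c(sigma_a, sigma_b) ) %*% R %*% diag(c(sigma_a, sigma_b) )

all.equal(Sigma1, Sigma2) # TRUE
```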
\[ \begin{align} \color{orangered}{w_{i}} \ &\color{orangered}{\sim \mathrm{Normal}(\mu_{i}, \sigma)} \\ \color{black}{\mu_{i}} \ &\color{black}{= \alpha_{\text{café}[i]} + \beta_{\text{café}[i]} \times A_{i}} \\ \color{steelblue}{\begin{bmatrix} \alpha_{\text{café}} \\ \beta_{\text{café}} \\ \end{bmatrix}} \ &\color{steelblue}{\sim \mathrm{MVNormal}\bigg(\begin{bmatrix} \alpha \\ \beta \end{bmatrix}, \textbf{S}\bigg)} \\ \color{black}{\textbf{S}} \ &\color{black}{= \begin{pmatrix} \sigma_{\alpha} & 0 \\ 0 & \sigma_{\beta} \\ \end{pmatrix} \ \textbf{R} \begin{pmatrix} \sigma_{\alpha} & 0 \\ 0 & \sigma_{\beta} \\ \end{pmatrix}} \\ \color{steelblue}{\alpha} \ &\color{steelblue}{\sim \mathrm{Normal} (0, 10)} \\ \color{steelblue}{\beta} \ &\color{steelblue}{\sim \mathrm{Normal} (0, 10)} \\ \color{steelblue}{\sigma_{\alpha}} \ &\color{steelblue}{\sim \mathrm{HalfCauchy} (0, 2)} \\ \color{steelblue}{\sigma_{\beta}} \ &\color{steelblue}{\sim \mathrm{HalfCauchy} (0, 2)} \\ \color{steelblue}{\sigma} \ &\color{steelblue}{\sim \mathrm{HalfCauchy} (0, 2)} \\ \color{steelblue}{\textbf{R}} \ &\color{steelblue}{\sim \mathrm{LKJ}(2)} \\ \end{align} \]
\(\textbf{S}\) is defined by factoring \(\sigma_{\alpha}\), \(\sigma_{\beta}\), and the correlation matrix \(\textbf{R}\). The next lines of the model simply define priors for constant effects. The last line specifies the prior for \(\textbf{R}\).
Prior proposed by Lewandowski et al. (2009). A single parameter \(\zeta\) (zeta) specifies the concentration of the distribution of the correlation coefficient. The \(\mathrm{LKJ}(2)\) prior defines a weakly informative prior for \(\rho\) (rho), sceptical of extreme correlations (i.e., values close to \(-1\) or \(1\)).
The brms package uses the same syntax as R base functions (like lm) or the lme4 package.
The left-hand side defines the dependent variable (or “outcome”, i.e., what we are trying to predict).
The first part of the right-hand side of the formula represents the constant effects (fixed effects), whereas the second part (between parentheses) represents varying effects (random effects).
The first model above contains only a varying intercept, which varies by Subject. The second model also contains a varying intercept, plus a varying slope for the effect of Days.
When including several varying effects (e.g., an intercept and a slope), brms assumes that we also want to estimate the correlation between these two effects. Otherwise, we can remove this correlation (i.e., set it to 0) using ||.
We specify an intercept and a slope (for the afternoon effect) which vary by cafe.
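A sketch of the corresponding brms call (assuming the `afternoon` dummy variable is coded 0/1 in the data frame `d`; the priors mirror the model equations above):

```r
library(brms)

mod5 <- brm(
    wait ~ 1 + afternoon + (1 + afternoon | cafe),
    prior = c(
        prior(normal(0, 10), class = Intercept), # prior on alpha
        prior(normal(0, 10), class = b), # prior on beta
        prior(cauchy(0, 2), class = sd), # priors on sigma_alpha and sigma_beta
        prior(cauchy(0, 2), class = sigma), # prior on the residual sigma
        prior(lkj(2), class = cor) # LKJ prior on the correlation matrix
        ),
    data = d
    )
```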
post <- as_draws_df(x = mod5) # extracting posterior samples
R <- rethinking::rlkjcorr(n = 16000, K = 2, eta = 2) # samples from prior
data.frame(prior = R[, 1, 2], posterior = post$cor_cafe__Intercept__afternoon) %>%
gather(type, value, prior:posterior) %>%
ggplot(aes(x = value, color = type, fill = type) ) +
geom_histogram(position = "identity", alpha = 0.2) +
labs(x = expression(rho), y = "Number of samples")

We compare the first model (complete pooling model), the third model (partial pooling model), and the last model (with varying intercept and slope).
mod5 <- add_criterion(mod5, "waic")
w <- loo_compare(mod1, mod2, mod3, mod5, criterion = "waic")
print(w, simplify = FALSE)
     elpd_diff se_diff elpd_waic se_elpd_waic p_waic se_p_waic waic se_waic
mod5 0.0 0.0 -155.1 10.0 26.4 2.6 310.3 20.1
mod3 -98.8 8.3 -253.9 8.3 18.3 1.5 507.8 16.6
mod2 -99.7 8.3 -254.8 8.4 19.8 1.6 509.6 16.8
mod1 -156.0 13.7 -311.2 10.5 2.1 0.3 622.3 21.1
mod1 mod2 mod3 mod5
1.739131e-68 5.080298e-44 1.273836e-43 1.000000e+00
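The relative weights shown above (one per model, summing to 1) can be obtained with brms's `model_weights()`; a sketch, assuming the four fitted models are available:

```r
# relative weight of each model, based on the WAIC
model_weights(mod1, mod2, mod3, mod5, weights = "waic")
```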
The estimate of the average waiting time is more uncertain when we take into account new sources of error. However, the overall error of the model (i.e., what is not explained), the residual variation \(\sigma\), decreases…
Estimate Est.Error Q2.5 Q97.5
b_Intercept 3.121625 0.07923713 2.966771 3.280714
sigma 1.143018 0.05640463 1.036736 1.256459
Estimate Est.Error Q2.5 Q97.5
b_Intercept 3.1247949 0.20583672 2.713431 3.5217680
sigma 0.8215185 0.04361643 0.741872 0.9129304
Estimate Est.Error Q2.5 Q97.5
b_Intercept 3.7355470 0.21026789 3.3160121 4.1495370
b_afternoon -1.2319627 0.08702830 -1.4031316 -1.0608732
sigma 0.4898022 0.02720104 0.4394355 0.5462991
Multilevel models (or “mixed-effects models”) are natural extensions of classical (single-level) regression models, where classical parameters are themselves assigned “models”, governed by hyper-parameters.
This extension makes it possible to make more precise predictions by taking into account the variability related to groups or structures (clusters) present in the data. In other words, by modelling the populations from which the varying effects are drawn (e.g., the population of participants or stimuli).
A single-level regression model is equivalent to a multilevel model where the variability of varying effects would be fixed at \(0\).
The Bayesian framework allows a natural interpretation of distributions from which the varying effects come. Indeed, these distributions can be interpreted as prior distributions, whose parameters are estimated from the data.
Reaction Days Subject
1 249.5600 0 308
2 258.7047 1 308
3 250.8006 2 308
4 321.4398 3 308
5 356.8519 4 308
6 414.6901 5 308
7 382.2038 6 308
8 290.1486 7 308
9 430.5853 8 308
10 466.3535 9 308
11 222.7339 0 309
12 205.2658 1 309
13 202.9778 2 309
14 204.7070 3 309
15 207.7161 4 309
16 215.9618 5 309
17 213.6303 6 309
18 217.7272 7 309
19 224.2957 8 309
20 237.3142 9 309
It’s up to you to build the mathematical models and brms models corresponding to the following models:
- Days.
- Days + a random effect of Subject (varying intercept).
- Days + a random effect of Subject (varying intercept + varying slope for Days).

Then, compare these models using model comparison tools and conclude.
fmod0 <- lm(Reaction ~ Days, sleepstudy)
fmod1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)
fmod2 <- lmer(Reaction ~ Days + (1 + Days | Subject), sleepstudy)
anova(fmod1, fmod2)
Data: sleepstudy
Models:
fmod1: Reaction ~ Days + (1 | Subject)
fmod2: Reaction ~ Days + (1 + Days | Subject)
npar AIC BIC logLik deviance Chisq Df Pr(>Chisq)
fmod1 4 1802.1 1814.8 -897.04 1794.1
fmod2 6 1763.9 1783.1 -875.97 1751.9 42.139 2 7.072e-10 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
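The Bayesian counterparts of these three models could be written in brms as follows (a sketch: default priors are kept for brevity, and the names mod6 to mod8 match the comparison below):

```r
library(brms)
library(lme4) # for the sleepstudy data

mod6 <- brm(Reaction ~ Days, data = sleepstudy)
mod7 <- brm(Reaction ~ Days + (1 | Subject), data = sleepstudy)
mod8 <- brm(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
```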
# computing and storing the WAIC of each model
mod6 <- add_criterion(mod6, "waic")
mod7 <- add_criterion(mod7, "waic")
mod8 <- add_criterion(mod8, "waic")
# comparing the WAICs of these models
w <- loo_compare(mod6, mod7, mod8, criterion = "waic")
print(w, simplify = FALSE)
     elpd_diff se_diff elpd_waic se_elpd_waic p_waic se_p_waic waic se_waic
mod8 0.0 0.0 -860.5 22.3 33.0 8.4 1721.0 44.6
mod7 -24.2 11.5 -884.7 14.5 19.4 3.3 1769.5 28.9
mod6 -92.8 20.9 -953.3 10.6 3.2 0.5 1906.6 21.1
mod6 mod7 mod8
5.015604e-41 2.977648e-11 1.000000e+00
Assumptions (not arbitrary!)…
“One of the most basic problems in scientific inference is the so-called inverse problem: How to figure out causes from observations. It is a problem, because many different causes can produce the same evidence. So while it can be easy to go forward from a known cause to predicted observations, it can be very hard to go backwards from observation to cause” (McElreath, 2020)…
Bayesian inference is a general approach to parameter estimation. This approach uses probability theory to quantify the uncertainty with respect to the value of the parameters of statistical models.
These models are composed of different blocks (e.g., likelihood function, priors, linear or non-linear model) which can be modified as desired. What are classically called the model’s “assumptions” are simply the consequences of modelling choices. In other words, the user chooses (rather than suffers) these assumptions.
We have seen that the linear regression model provides a very flexible architecture which makes it possible to describe, via the modification of the likelihood function and via the introduction of link functions, complex (e.g., non-linear) relationships between outcomes and predictors. These models can gain in precision by taking into account the variability and structures (clusters) present in the data (cf. multilevel models).
The brms package is a real Swiss army knife of Bayesian statistics in R. It allows you to fit almost any type of regression model. This includes all the models we have seen, but also many others: multivariate models (i.e., models with several outcomes), “distributional” models (e.g., to predict differences in variance), generalised additive models, Gaussian processes, models from signal detection theory, mixture models, drift diffusion models, non-linear models…
Do not hesitate to contact me for more information on these models or if you have any questions about your own data. You can also contact the creator of the brms package, who is very active online (see his site). See also the Stan forum.
Ladislas Nalborczyk - IBSM2023